Multi-Task Active Learning for Neural Semantic Role Labeling on Low Resource Conversational Corpus
Most Semantic Role Labeling (SRL) approaches are supervised methods that
require a significant amount of annotated data, and the annotation requires
linguistic expertise. In this paper, we propose a Multi-Task Active Learning
framework for Semantic Role Labeling with Entity Recognition (ER) as the
auxiliary task, both to alleviate the need for extensive data and to use
additional information from ER to help SRL. We evaluate our approach on an
Indonesian conversational dataset. Our experiments show that multi-task active
learning can outperform both the single-task active learning method and
standard multi-task learning. According to our results, active learning is more
efficient, using 12% less training data than passive learning in both the
single-task and multi-task settings. We also introduce a new dataset for SRL in
the Indonesian conversational domain to encourage further research in this
area.

Comment: ACL 2018 workshop on Deep Learning Approaches for Low-Resource NLP
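The core of an active learning framework like the one described is the query
strategy: scoring unlabeled examples by model uncertainty and sending the most
uncertain ones to an annotator. The paper does not specify its selection
criterion in this abstract, so the sketch below is a hypothetical illustration:
it combines entropy-based uncertainty from two assumed per-task confidence
distributions (one for SRL, one for ER) into a single acquisition score, which
is one common way to extend uncertainty sampling to a multi-task setting.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(pool, srl_probs, er_probs, k, alpha=0.5):
    """Pick the k most uncertain examples from the unlabeled pool.

    srl_probs / er_probs map each example id to that task's predicted
    label distribution (hypothetical model outputs, for illustration).
    alpha weights SRL uncertainty against ER uncertainty.
    """
    scores = {
        x: alpha * entropy(srl_probs[x]) + (1 - alpha) * entropy(er_probs[x])
        for x in pool
    }
    return sorted(pool, key=lambda x: scores[x], reverse=True)[:k]

# Toy pool of three sentences with made-up per-task confidences.
pool = ["s1", "s2", "s3"]
srl_probs = {"s1": [0.9, 0.1], "s2": [0.5, 0.5], "s3": [0.8, 0.2]}
er_probs = {"s1": [0.85, 0.15], "s2": [0.6, 0.4], "s3": [0.7, 0.3]}

batch = select_batch(pool, srl_probs, er_probs, k=2)
print(batch)  # the near-uniform (most uncertain) examples come first
```

In a real loop, the selected batch would be annotated, added to the training
set, and both task models retrained before the next selection round; the 12%
data saving reported above comes from stopping this loop earlier than passive
(random-order) training would allow.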